Cognitive, Affective, & Behavioral Neuroscience
Springer Science and Business Media LLC
Preprints posted in the last 90 days, ranked by how well they match Cognitive, Affective, & Behavioral Neuroscience's content profile, based on 25 papers previously published here. The average preprint has a 0.01% match score for this journal, so anything above that is already an above-average fit.
Kalburge, I.; Dallstream, A.; Josic, K.; Kilpatrick, Z. P.; Ding, L.; Gold, J. I.
Decisions based on evidence accumulated over time require rules governing when to end the accumulation process and commit to a choice. These rules control inherent trade-offs between decision speed and accuracy, which must be balanced carefully to maximize quantities, such as reward rate, that depend on both. We previously showed that, to maximize reward rate, normative decision rules adapt to changing task conditions (Barendregt et al., 2022). Here we used a novel task to examine whether and how people use adaptive rules for individual decisions under a variety of conditions, including changes in decision outcomes across trials and changes in evidence quality both across and within trials. We found that participants tended to use rules that adjusted, at least partially, to predictable changes in task conditions to improve reward rate, consistent with a rationally bounded implementation of normative principles. These findings help inform our understanding of the extent and limits of flexible decision formation in the brain.
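The speed-accuracy trade-off described in this abstract can be made concrete with a standard drift-diffusion model. Below is a minimal sketch (not the authors' adaptive-bound model from Barendregt et al., 2022) using the textbook closed-form expressions for accuracy and mean decision time of a symmetric-bound diffusion, plus a grid search for the bound that maximizes reward rate; all parameter values are illustrative.

```python
import numpy as np

def ddm_stats(a, v, s=1.0):
    """Accuracy and mean decision time for a drift-diffusion model with
    symmetric bounds at +/-a, drift v > 0, and noise s (standard results)."""
    acc = 1.0 / (1.0 + np.exp(-2.0 * a * v / s**2))
    dt = (a / v) * np.tanh(a * v / s**2)
    return acc, dt

def reward_rate(a, v, t_nd=0.3, iti=1.0):
    """Correct responses per unit time, counting non-decision time and
    the inter-trial interval (t_nd and iti are illustrative values)."""
    acc, dt = ddm_stats(a, v)
    return acc / (dt + t_nd + iti)

# Grid-search the bound that maximizes reward rate for a fixed drift.
bounds = np.linspace(0.05, 3.0, 300)
rates = np.array([reward_rate(a, v=1.0) for a in bounds])
best = bounds[int(np.argmax(rates))]
```

When evidence quality (drift) or the outcome structure changes, the reward-rate-maximizing bound shifts accordingly, which is why normative decision rules must adapt to task conditions.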
Nagisa, S.; Oblak, E.; Shimojo, S.; Shibata, K.
Multitasking is generally regarded as detrimental to performance. This deterioration effect is typically explained by interference among tasks due to the limited capacity of information-processing resources, which in turn reduces performance in each task. Contrary to this general view, we report evidence for a facilitation effect of multitasking on performance. This facilitation effect was observed when multitasking on a handgrip muscular endurance task and a cognitive task, which are known to have little interference with each other. Specifically, we found that performance in the endurance task improved with the difficulty of the concurrent cognitive task. This facilitation effect was mediated by additional pupil dilation due to the cognitive task. Increased effort with the difficulty of the cognitive task cannot explain the facilitated performance in the irrelevant endurance task. Instead, our results suggest that the cognitive task elevated overall arousal to a level unattainable by the endurance task alone, which in turn facilitated performance in the irrelevant endurance task. To further test this arousal account, we manipulated participants' motivation for the cognitive task by reward, without changing its difficulty, and found the same pattern of results. Thus, it is not effort or motivation specific to the cognitive task but rather the overall arousal level that underlies the facilitation effect. These results unveil a previously overlooked mechanism: a multitasking-induced arousal boost. Our findings suggest that multitasking can facilitate performance when the net effect of adding a concurrent task is governed less by capacity limitation and more by the elevation of overall arousal.
Knobloch, S.; Jansen, T.; Hille, L.; Mueller, M.; Rumpf, L.; Haaker, J.
By relying on the observation of others' experiences, humans learn about threat while avoiding harmful experiences. Yet, previous neuroscience research has focused on observational threats that are predictable. While the neurobiological distinction between temporally predictable (cued) and unpredictable (contextual) threats has been well characterized in firsthand learning, it has not been examined in observational learning. In this study, we developed a novel observational paradigm, based on the NPU paradigm, in which participants learned from predictable (P) and unpredictable (U) observational threats, as well as a no-threat (N) condition, and then encountered the same conditions during an expression phase, to investigate how the brain encodes predictable and unpredictable threat cues observed in others. Participants in Experiment 1 (n=20, male and female) and Experiment 2 (n=23, male and female) successfully learned threat contingencies, showing heightened threat expectations for predictable cues and unpredictable contexts. This converged with neural (fMRI, Experiment 2) responses in the anterior insula during the expression phase. Reflecting the dynamic process of learning, the amygdala responded to predictable threat cues with a linear decrease across trials. Interestingly, we found that responses to others' pain were enhanced within the amygdala, insula and hippocampus when participants could learn to predict threats, as compared to unpredictable conditions. Our findings suggest that humans learn to resolve temporal uncertainty relying solely on observation, which lays a foundation for the concept of fear and anxiety in social groups.
DallaVecchia, A.; Zink, N.; O'Connell, S. R.; Betts, S. S.; Noah, S.; Hillberg, A.; Oliva, M. T.; Reid, R. C.; Cohen, M. S.; Simpson, G. V.; Karalunas, S. L.; Calhoun, V. D.; Lenartowicz, A.
Historically, neural variability observed during task performance was interpreted as "noise," assumed to obscure meaningful signal and thus something to be minimized both analytically by researchers and functionally by the brain. Changes to this signal-to-noise ratio have been proposed as a possible neural mechanism behind the increased reaction-time variability (RTV) in attention deficit hyperactivity disorder (ADHD). However, not all variability is the same - in some cases, variability can have an underlying "statistical structure" that can be beneficial to information processing. The challenge lies in distinguishing meaningful variability from random noise. The edge-of-synchrony critical point, which describes a system poised between synchronous and asynchronous regimes, offers a theoretical framework for studying these different types of neural variability. In this study, we investigated whether changes in criticality and oscillatory dynamics preceded slower behavioral responses during a bimodal continuous performance task in ADHD. We found evidence that, prior to slower responses, neural dynamics shift toward criticality in both ADHD and control groups, suggesting that increased variability in ADHD and during attention lapses is related to structured variability and not necessarily random noise. Notably, these findings run counter to predictions based on the proposed model and previous literature on neural noise in this population, challenging edge-of-synchrony criticality as a unifying account of neural variability and behavioral performance. Furthermore, this effect did not emerge at the between-subject level, underscoring the limitations of relying on between-subject correlations to infer neural mechanisms. Impact Statement: Our findings add a new perspective to the hypothesis that links neural variability to reaction-time variability in adults with and without ADHD. We found that neural dynamics shift towards criticality prior to slow reaction times in adults with and without ADHD, but in ADHD, dynamics lie closer to criticality regardless of response type, suggesting a different "attractor" state.
Grote, L. A.; Schneider, D.; Wascher, E.; Arnau, S.
Sense of agency (SoA), the experience of controlling one's actions and their consequences, is crucial for self-representation and adaptive goal-directed behavior. Classic comparator models explain SoA as the match between predicted and actual sensorimotor outcomes, whereas inference-based and Bayesian accounts emphasize cue integration and probabilistic weighting. Besides the influence of action-outcome contingencies on SoA, the feedback effect of perceived SoA on cognitive processing is also crucial for cognitive performance. Much of today's cognitive work is performed through interaction with devices that are not entirely reliable or are prone to operator error. Against this background, it is of particular interest whether the impact of an expectancy violation differs depending on whether the outcome is attributed to a malfunctioning system or to one's own mistake. To investigate this, the present study deployed manipulated performance feedback in a color-discrimination task while EEG was recorded. Thirty-five participants performed this task with periods of veridical feedback, periods with feedback simulating an increased error rate, and periods of feedback suggesting malfunctioning response buttons. Behavioral performance was decomposed using the EZ-diffusion model, and time-frequency EEG analyses focused on event-related alpha, beta, and theta oscillations. Participants responded significantly slower in the self-attribution of errors condition compared to neutral feedback, and slower still in the system-attribution of errors condition compared to the self-attribution condition. Decomposing behavior using drift-diffusion modeling indicates that the general increase of response times with manipulated feedback can be attributed to decreased drift rates, whereas the difference between the self and system error conditions is reflected in the non-decision time. In the EEG, the manipulated feedback was reflected in attenuated decreases of occipital alpha and sensorimotor beta power during the cue-target interval. Furthermore, system- versus self-attributed errors elicited stronger feedback-locked midfrontal theta responses. Our findings suggest a functional dissociation within the agency inference process, where perceived controllability regulates preparatory investment of cognitive resources, while the attribution of action-outcome discrepancies seems to modulate sensory processes or motor execution.
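The EZ-diffusion decomposition used in this study has a simple closed form (Wagenmakers et al., 2007): drift rate, boundary separation, and non-decision time are recovered from just accuracy, RT variance, and mean RT. A sketch of those standard equations; the input values below are made up for illustration, not data from this study.

```python
import math

def ez_diffusion(pc, vrt, mrt, s=0.1):
    """Closed-form EZ-diffusion estimates (Wagenmakers et al., 2007).
    Inputs: accuracy pc (not exactly 0, 0.5, or 1), variance of correct
    RTs vrt (in s^2), mean correct RT mrt (in s); s is the scaling parameter.
    Returns drift rate v, boundary separation a, non-decision time ter."""
    L = math.log(pc / (1.0 - pc))                   # logit of accuracy
    x = L * (L * pc**2 - L * pc + pc - 0.5) / vrt
    v = math.copysign(1.0, pc - 0.5) * s * x**0.25  # drift rate
    a = s**2 * L / v                                # boundary separation
    y = -v * a / s**2
    mdt = (a / (2.0 * v)) * (1.0 - math.exp(y)) / (1.0 + math.exp(y))
    ter = mrt - mdt                                 # non-decision time
    return v, a, ter

# Illustrative inputs only (not this study's data):
v, a, ter = ez_diffusion(pc=0.8, vrt=0.1, mrt=0.5)
```

In this framework, slower responses driven by a lower drift rate reflect less efficient evidence accumulation, whereas slowing absorbed into non-decision time reflects changes outside the decision process, which is the dissociation the abstract reports between the self- and system-error conditions.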
Seo, S.; Lee, S.; Lee, N.; Kim, S.-P.
Choice overload occurs when an ever-growing number of options impairs decision quality, because evaluating options taxes cognitive resources. We investigated whether reducing cognitive demand could mitigate overload by encouraging greater cognitive effort toward achieving the optimal choice. We conducted two experiments manipulating cognitive demand in complementary ways: Experiment 1 reduced demand by presenting high-attractiveness sets, and Experiment 2 did so by providing a shortlist tool. In both experiments, participants chose from sets of 6-24 options while their eye-gaze and electroencephalographic (EEG) data were recorded. We found that reducing demand made decisions faster, but did not improve choice performance as set size increased. Under low-demand conditions, eye-gaze measures revealed narrower search and EEG measures showed reduced working memory engagement per option, together indicating less search and processing effort. These results suggest that even with reduced cognitive demand, people coast through easier decisions, conserving effort and leaving the choice overload effect largely intact.
Tang, R.; Tan, J.; Gao, Y.; Lin, C.; Gan, J.; Ding, X.; Gao, D.
Cooperative behavior is a cornerstone of human interaction. Although both "betrayal aversion" (the affective cost of being betrayed) and "loss aversion" (the financial detriment incurred from betrayal) are established determinants of cooperative behavior, their relative potency remains undetermined. Here, we investigated these effects by integrating computational modeling and event-related potential (ERP) techniques. In two tasks involving risk and cooperation, participants decided whether to take financial risks or to cooperate under possible betrayal. Our results showed that betrayal aversion had a stronger effect on reducing cooperation compared to loss aversion. Furthermore, ERP data demonstrated sequential processing: betrayal was encoded early in decision-making, reflected by increased P3 with weaker betrayal aversion, whereas loss aversion manifested later, marked by increased LPP. By dissociating the contributions of betrayal and loss, our findings provide novel insights into the cognitive and neural mechanisms underlying cooperative behavior.
Wirth, L. A.; Sadedin, N.; Meder, B.; Schad, D. J.
Background: Pavlovian responding is a core component of behavior and can be measured via Pavlovian-instrumental transfer (PIT), where Pavlovian responses bias instrumental actions. Standard single-lever PIT paradigms, which assess responses using a single choice option, cannot dissociate the contributions of model-free versus model-based reinforcement learning. While indirect evidence suggests a role for model-free responding in single-lever PIT, the contribution of model-based strategies is unclear. It also remains unknown whether internal cognitive states, such as mind wandering, impair specifically model-based but not model-free PIT, as is theoretically expected. Methods: We developed a novel, trial-by-trial two-stage PIT paradigm designed to computationally dissociate model-free and model-based Pavlovian responding by leveraging probabilistic state transitions and trial-wise outcome predictions. After each two-stage Pavlovian learning trial, participants performed a single-lever PIT trial as well as a query trial of explicit value judgment. Detailed task instructions were provided to support potential model-based strategies. Computational modeling was used to quantify individual learning strategies. We assessed mind wandering with questionnaires and thought probes. Results: Analysis of query and PIT trials revealed trial-by-trial updating of outcome expectations based on the probabilistic task structure, consistent with model-based Pavlovian responding. Behavioral responses during PIT were best explained by a model-based reinforcement learning model. In contrast, we found little evidence for model-free Pavlovian responding. Higher levels of mind wandering were associated with reduced model-based control but did not impact model-free indices. Conclusion: We introduce a novel single-lever PIT paradigm that enables fine-grained dissociation of model-free versus model-based Pavlovian response systems. Our findings provide evidence that single-lever PIT can operate through model-based mechanisms, challenging the assumption that single-lever PIT is predominantly model-free. Our findings also indicate that internal attentional states selectively modulate model-based PIT. Given the involvement of Pavlovian responding in numerous psychiatric conditions, our paradigm offers new avenues for understanding maladaptive behavior. Author Summary: Our daily actions are often influenced by cues, like the smell of food or the sound of phone notifications, that signal potential rewards or losses. These Pavlovian cues can shape our instrumental behavior even though their outcomes do not depend on what we do - a process known as Pavlovian-instrumental transfer (PIT). Here we study the computational learning mechanisms that underlie such PIT effects. While it is often assumed that Pavlovian responding follows simple, automatic rules without a cognitive model of cue consequences (i.e., model-free), evidence also shows a role for cognitive anticipations in Pavlovian responding (i.e., model-based). In this study, we extend this evidence by showing that PIT responding can be driven by flexible model-based learning. We designed a task to test whether participants use model-free versus model-based strategies to guide PIT, providing detailed task instructions. Using reinforcement learning models, we found that most participants used model-based learning when forming cue-outcome associations. Importantly, people's attention mattered: when they were more distracted and mind wandering, they relied less on model-based strategies. Our findings suggest that Pavlovian learning is complex, flexible, and influenced by internal mental states, opening new windows onto decision-making problems in mental health conditions like addiction.
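The model-free/model-based dissociation this paradigm targets can be illustrated with a toy two-stage structure in the style of Daw et al. (2011); the transition probabilities, state values, and learning rate below are hypothetical, not the authors' fitted model.

```python
import numpy as np

# Hypothetical transition model: cue A reaches state S1 with p = 0.7,
# cue B reaches S2 with p = 0.7 (all values illustrative, not fitted).
P = np.array([[0.7, 0.3],   # P(S1), P(S2) given cue A
              [0.3, 0.7]])  # P(S1), P(S2) given cue B
q2 = np.array([0.2, 0.8])   # learned values of second-stage states

# Model-based: prospectively combine the transition model with state values.
q_mb = P @ q2               # cue B is better: it usually reaches the good state

# Model-free: cached cue values, updated only by directly experienced reward.
alpha = 0.3                 # learning rate (illustrative)
q_mf = np.zeros(2)
# Suppose cue A was chosen, a rare transition led to S2, and reward r = 1:
r = 1.0
q_mf[0] += alpha * (r - q_mf[0])

# After a rewarded rare transition the two systems disagree: model-free
# credits cue A (chosen and rewarded), model-based credits cue B.
```

It is exactly this disagreement after rare transitions that lets trial-by-trial analyses attribute behavior to one system or the other.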
Laing-Young, J. M.; Savage, C. R.; Tomaso, C.; Neta, M.; Nelson, T. D.; Schultz, D. H.
Obesity is a growing public health concern, with more than 40% of adults meeting criteria for obesity in the United States. Although many treatments seek to lower individuals' weight, few have focused on cognitive strategies that change the way individuals think about food and thereby decrease consumption of non-nutrient-dense foods. Cognitive reappraisal is one such strategy: it involves changing the way one thinks about a situation and can be used to downregulate responses to stimuli. Leveraging this intuitive, cost-effective strategy to decrease one's desire to eat unhealthy food, and thereby decrease overeating, could improve physical and mental health. The present study identified brain regions that are differentially activated when using cognitive reappraisal to downregulate responses to food (FR) versus when using the same strategy to downregulate negative emotions (ER). We collected functional magnetic resonance imaging (fMRI) data in 63 undergraduate students while participants completed both tasks. There was increased reappraisal-related activation in widespread regions across both tasks, including in expected subcortical (i.e., striatum) and cortical areas (i.e., visual, frontoparietal). We also found domain-specific activity, with greater insula activation in the FR than the ER task and greater hippocampal activation in the ER than the FR task. These results reveal domain-general and domain-specific effects of cognitive reappraisal in FR and ER tasks that inform future work examining eating behavior. Taken together, a better explication of the overlapping and discrete processes of food regulation, as compared with other applications of this regulatory strategy, can inform new intervention targets.
Mauter, G.; Liljeholm, M.
Normative social conformity has been proposed to elicit a hedonic reward signal that is dissociable from informational inferences about decision outcomes. If present, such a signal should reinforce not just the decision that preceded it, but also any incidentally co-occurring stimulus features. Alternatively, normative conformity might reflect a non-hedonic imitation algorithm. Across two studies (n=359) we used a non-deceptive multi-participant gambling task in which trial-by-trial information was provided about the selections and monetary payoffs of two other participants facing the same recurring options in real time. Consistent with both accounts, and contrary to mere monetary maximization, the probability of staying with a losing option increased with the degree of decision unanimity. However, contrary to the social reward hypothesis, only monetary payoffs modulated the valence of incidental gambling stimuli. A prosocial framing did not significantly alter this pattern of results, which favors an imitative over a hedonic account of normative social conformity.
Tzionit, N.; Filmon, D. G.; Maeir, T.; Boettcher, S. E. P.; Nobre, A. C.; Shalev, N.; Landau, A. N.
Attention-deficit/hyperactivity disorder (ADHD) has been associated with atypical temporal processing across multiple cognitive domains. However, most evidence derives from simplified paradigms that isolate timing from spatial behaviour. Here, we examine how temporal prediction operates within a continuous, dynamic visual environment. Using the Dynamic Visual Search (DVS) task, we embedded spatiotemporal regularities into a sustained stream of visual events, allowing observers to implicitly learn and anticipate predictable targets. Continuous mouse tracking provided a fine-grained measure of action planning beyond discrete reaction time and accuracy metrics. Young adults diagnosed with ADHD (N=40) were compared to matched neurotypical controls (N=38). Both groups benefited from target predictability and reduced distractor load, indicating intact early spatiotemporal learning in ADHD. Across the duration of the task, however, the groups diverged. Neurotypical participants showed progressive increases in behavioural benefits from prediction, accompanied by increasingly direct and efficient mouse trajectories. In contrast, individuals with ADHD reached a plateau in prediction benefits midway through the experiment. Their performance remained stable, with minimal evidence of resource depletion, but did not show further optimisation based on learned regularities. These findings suggest that while prediction formation is preserved in ADHD, its progressive utilisation across longer timescales is attenuated. Rather than reflecting a primary deficit in learning or sustained attention, ADHD may involve altered long-timescale integration or weighting of predictive information in dynamic environments.
Zareba, M. R.; Gonzalez-Garcia, I.; Ibanez Montolio, M.; Binney, R. J.; Hoffman, P.; Visser, M.
Excessive self-blaming emotions are commonly observed in anxiety disorders, with qualitatively similar symptomatology reported in subclinical populations. Interpretation of moral information requires assessing social conceptual information, a process overseen by the superior anterior temporal lobe (sATL). Feelings of self-blame evoke interactions between the sATL and socio-affective regions, and previous research shows that subclinical anxiety modulates the organisation of the self-blame circuitry. This study aimed to extend these findings by exploring links of trait anxiety with (i) self-blaming emotions and associated behaviours in an experimental task, and (ii) self-blame-dependent neural activity and connectivity, as observed during reliving of autobiographical guilt memories. We also explored the role of resting-state fMRI in linking these phenomena. Increased anxiety was linked to stronger self-blaming emotions, and more pronounced self-attacking and hiding. When experiencing negative emotions about themselves (i.e. shame and self-anger), anxious individuals were also less likely to disengage from self-focused thoughts. These behavioural findings were paralleled by enhanced self-blame-related connectivity between the left sATL and bilateral posterior subgenual cingulate cortex. Distinct patterns of activity and connectivity within the ATL-related circuitry were furthermore linked to individual differences in intensity of the self-blaming emotions and approach-avoidance motivation towards the guilt memories. As such, the results of the current study link stronger self-blaming emotions in anxious individuals with specific maladaptive patterns of behaviour. Furthermore, the work provides robust evidence for the important role of ATL-related circuitry in self-blame processing, supporting its broader involvement in social conceptual processing and its alterations in subclinical anxiety.
Fumery, T.; Chaise, F.; Soille Hambye, A.; Fievez, F.; Lambert, J.; Vassiliadis, P.; Derosiere, G.; Duque, J.
Everyday decisions unfold dynamically, with commitment shaped by a growing sense of urgency that can, when excessive, contribute to impulsive choices. Here we aimed at dissociating two modes of urgency regulation, control-driven (accuracy-oriented) and reward-driven (motivation-based), and asked whether their relative influence varies across individuals differing in impulsivity. We further investigated how these regulatory modes are implemented in the motor system, focusing on two modulatory effects: surround inhibition and broad modulation. Healthy participants, whose impulsivity was assessed with the UPPS urgency dimension, performed a modified Tokens task crossing control demands (low vs high control blocks) with motivational context (low vs high reward trials). In two separate sessions, single-pulse TMS was applied either over the hand motor representation to probe corticospinal excitability indexing surround inhibition, or over the leg representation to index broad modulations of motor activity. This design successfully dissociated the two regulatory modes: control-driven adjustments (across blocks) were most evident in less impulsive participants, whereas reward-driven adjustments (across trials) were most evident in more impulsive participants. Consistent with this dissociation, control-driven urgency regulation was associated with broad modulation of motor activity, whereas reward-driven urgency adjustments were associated with changes in surround inhibition. These motor signatures may serve as probes of the respective contributions of control- and reward-driven regulation even when they are not explicitly dissociated. Our findings suggest that impulsivity may not simply reflect "more urgency" but a different weighting of the influences that shape it during decision making, a hypothesis that can now be tested in clinical conditions.
Ma, H.; Fennema, D.; Simblett, S.; Zahn, R.
Aims: Due to the multifaceted nature of "impulsivity", its measurement remains fragmented. Here, we developed the Risky Social Choices task to provide evidence for its validity and reliability, while testing the hypothesis that impaired access to implicit knowledge of negative long-term consequences is of distinct importance for "impulsive" decision-making in a general population sample. Methods: Forty participants chose whether to engage in risk-taking behaviors presented as web-based AI-generated videos with narrated hypothetical scenarios. The task measured worries related to negative long-term consequences, approach-related motivation for short-term rewards, and the response time and accuracy of recognizing degraded auditory prime words denoting negative long-term consequences. Results: A pre-registered multi-step regression model was constructed with worry, motivation, response time and accuracy as predictors and percentage of risky choices as the outcome. Among all predictors, only prime word recognition accuracy was significantly negatively associated with risky choices, confirming our hypothesis of the role of reduced implicit access to negative long-term consequences in risk-taking decisions. In contrast, approach-related motivation for rewards was the only predictor significantly positively related to percentage of risky choices. Discussion: As predicted, the negative association between risky choices and implicit access to negative long-term consequences supports its role as a distinct aspect of "impulsivity". The novel task successfully captured this aspect, paving the way for a more precise neurocognitive characterization of clinical conditions where "impulsivity" plays a key role. The findings unveil the importance of implicit social sequential knowledge for impulsivity in neurotypical populations, so far investigated only in patients with brain lesions.
Smith, E.; Theis, H.; van Eimeren, T.; Knauth, K. H. K.; Tuzsus, D.; Mathar, D.; Peters, J.
Dopamine (DA) has been implicated in exploration-exploitation behaviour, i.e., exploring novel, potentially better options vs. exploiting known, previously rewarding options. Impairments in this trade-off occur in psychiatric disorders involving DAergic dysfunction, including addiction and schizophrenia. Pharmacological studies revealed a contribution of DA to exploration, but inconsistent findings suggest that interindividual variability in baseline DA may modulate effects. To address this, we investigated the effects of the DA precursor L-DOPA on exploration-exploitation during reinforcement learning in a sample of N = 75 healthy participants (n = 32 women), following a randomised, double-blind, placebo-controlled, pre-registered design (https://osf.io/p2r7u). We assessed whether putative baseline DA markers, including spontaneous eye blink rate, working memory (WM) capacity, and impulsivity, modulated drug effects and probed visual fixation patterns and pupil dilation as markers of exploration. L-DOPA had no overall effect on computational model parameters of random exploration, directed exploration or choice perseveration. WM capacity moderated drug effects on random exploration, with stronger effects at higher WM capacity. Remaining DA proxies showed no credible effects. Pooling the data from male participants with that from an earlier male-only study (Chakroun et al., 2020; total N = 74), L-DOPA increased uncertainty-dependent value weighting and perseveration strength, while decreasing habit updating, indicating a stronger tendency to repeat previous choices and slower decay of their influence over time. No credible drug effects were observed in female participants. Pupil dilation was tonically increased under L-DOPA and scaled with exploration behaviour and prediction error, confirming that pupillometry can index exploration-exploitation dynamics. Visual exploration patterns reflected uncertainty-driven sampling, but were unaffected by L-DOPA. 
Taken together, results suggest that DAergic modulation of exploration and perseveration behaviour may be contingent on cognitive capacity and sex, rather than exerting uniform effects across individuals.
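Directed versus random exploration, as distinguished in studies like this one, is often parameterized as an uncertainty bonus plus a softmax temperature. The sketch below uses one common parameterization (not necessarily the authors' pre-registered model); the option values and uncertainties are invented for illustration.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a vector of decision values."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def choice_probs(mu, sigma, beta=3.0, phi=1.0):
    """One common parameterization of the exploration trade-off:
    phi scales a directed-exploration bonus for uncertain options,
    beta (inverse temperature) governs random exploration.
    Not necessarily the model fitted in the study above."""
    return softmax(beta * (mu + phi * sigma))

mu = np.array([0.6, 0.5])       # estimated option values (illustrative)
sigma = np.array([0.05, 0.30])  # per-option uncertainty (illustrative)

p_no_bonus = choice_probs(mu, sigma, phi=0.0)  # pure exploitation of means
p_bonus = choice_probs(mu, sigma, phi=1.0)     # bonus favors the uncertain option
```

With the bonus switched off, choice tracks the higher mean; with it on, the more uncertain option gains probability, which is how directed exploration and choice perseveration can be separated from simple value-driven responding in model fits.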
Michiels, M.
Habits in humans are commonly studied through outcome devaluation paradigms, but most existing tasks fail to capture the robustness of habitual behavior seen in animal models. I introduce two novel behavioral tasks designed to overcome these limitations. In the first task ("shooting aliens task", n = 45), I simplified an existing instrumental learning task and implemented a novel intra-block reversal method in which stimulus positions changed unexpectedly within blocks while maintaining the same stimulus-action mappings. Participants also completed a classical devaluation phase with explicit reward changes. In the second task ("hands-attack task", n = 44), which relied on real-life avoidance behavior, devaluation was achieved by reversing reward contingencies and allowing participants to inhibit the dominant avoidance response in favor of a more effortful counterattack. Across both tasks, overtrained conditions led to more errors and longer response times after devaluation, confirming increased insensitivity to outcome change. Intra-block reversals in the shooting aliens task produced stronger habitual signatures than standard whole-block devaluation, revealing a greater cost of overriding automatic responses. In the hands-attack task, even without prior training, participants showed clear markers of habitual behavior, suggesting that real-world action patterns can replicate key features of laboratory habits. Interestingly, participants were more accurate in overriding overtrained responses when attacks were highly familiar, possibly due to enhanced perceptual processing, although this came at the cost of longer response times. These findings introduce two complementary tools that address key limitations in current paradigms: the intra-block reversal increases habit sensitivity without inflating working memory demands, while the hands-attack task captures naturalistic habit expression without artificial training, using a single, ecologically valid session.
Both are suited for clinical applications, particularly where time constraints or cognitive load limit the feasibility of traditional approaches.
Gelebart, J.; Digonet, G.; Jacquet, T.; Ruffino, C.; Debarnot, U.
Mental fatigue (MF) arises from sustained cognitive load and produces a multisystem signature spanning subjective experience, task performance, cortical oscillations, and oculomotor dynamics. It may alter higher-order cognitive functions essential to everyday life, underscoring the need for preventive strategies. Although moderate aerobic exercise (EXO) facilitates recovery from MF, its influence on the onset and expression of MF when performed beforehand remains unexplored. This study provided a multimodal characterization of MF, assessed its impact on associative memory and divergent creativity, and examined whether prior EXO modulated these outcomes. Twenty-nine participants completed either 15 min of EXO or rest (REST) before a 35-min MF-inducing Time Load Dual Back task. Subjective fatigue and effort, performance, EEG activity, and eye-blink rate were continuously recorded; associative memory and divergent creativity were assessed pre-intervention and post-MF. Both groups showed progressive increases in MF and effort from 7 min onward, stable performance, and a rise in parieto-central alpha power at 18 min. The EXO group exhibited higher frontal-medial theta power and stable blink rates, whereas blink rate in the REST group increased at 21 min. EXO did not prevent subjective MF nor influence behavioral stability, but it modulated neurophysiological markers potentially related to compensatory control and dopaminergic regulation. Associative memory remained preserved in both groups, whereas creative flexibility increased in the REST group but not the EXO group, suggesting MF-related disinhibition in the former and preserved inhibitory control in the latter. These findings refine the temporal and multimodal profile of MF and highlight the need to optimize exercise parameters and task demands to enhance preventive efficacy and guide interventions.
Dirupo, G.; Westwater, M. L.; Khaikin, S.; Feder, A.; DePierro, J. M.; Charney, D. S.; Murrough, J. W.; Morris, L. S.
Deficits in inhibitory control are common across a wide range of psychiatric disorders and are closely linked to symptom severity, including emotional dysregulation, anxiety, substance misuse, and self-harm, making them an appealing target for intervention. Cognitive training offers a low-cost, scalable, and non-invasive strategy to strengthen inhibitory control; however, most existing paradigms target only a single facet of inhibition and rarely account for environmental influences, such as affective context. To address these gaps, we developed a computerized inhibitory control training paradigm to simultaneously engage three components of inhibition (preemptive, proactive, and reactive) while embedding trials within positive and negative affective contexts to assess the impact of emotional stimuli. Across two online experiments, participants completed the GAMBIT task in one session (Experiment 1, N = 300) or repeated over three sessions (Experiment 2, N = 65). The task included No-Go trials to train preemptive inhibition, stop-signal trials for reactive inhibition, and stop-signal anticipation trials to train proactive inhibition. Affective images of differing valence were presented as background stimuli to evaluate their impact on inhibitory performance. In Experiment 1, participants showed higher accuracy on No-Go versus reference Go trials (β = 1.45, SE = 0.09, p < .001), confirming successful manipulation of preemptive inhibition. Reaction times were slower during anticipation trials across two different conditions (β = 0.16, SE = 0.04, p < .001; β = 0.07, SE = 0.04, p = .047), consistent with proactive slowing when anticipating a potential stop signal. Additionally, positive affective images (β = 0.10, SE = 0.009, p < .001) further slowed RTs, indicating emotional interference with proactive control.
In Experiment 2, the pattern of higher No-Go accuracy was replicated (β = 0.91, SE = 0.11, p < .001) and accuracy generally improved over sessions (β = 0.38, SE = 0.06, p < .001). In anticipation trials, RTs became shorter across sessions (session 2: β = -0.25, SE = 0.06, p < .001; session 3: β = -0.45, SE = 0.06, p < .001), reflecting practice-related gains, and SSRTs decreased over time (F(2,56) = 6.26, p = .004), consistent with enhanced reactive inhibition. Proactive inhibition was modulated by affective images, with both negative (β = 0.04, SE = 0.02, p = .039) and positive (β = 0.16, SE = 0.02, p < .001) affective images associated with slower RTs. Participants also reported reductions in self-assessed temper control by the last session (W = 25.5, p = .007, q = .037, d = -0.51), and usability ratings were high (all means ≥ 3.87/5). Together, these findings show that this paradigm recruits multiple forms of inhibitory control and yields training-related improvements in both performance and affective outcomes. This provides preliminary validation of a scalable, fully online inhibitory control training tool targeting multiple dissociable inhibitory processes within affective contexts. The approach holds promise as an accessible transdiagnostic intervention to support symptom improvement across psychiatric disorders, with future work needed to evaluate clinical efficacy in patient populations.
Tsuji, Y.; Kondo, I.; Shimada, S.
Mindfulness-based interventions are increasingly delivered online, yet evidence for short programs often relies on self-report outcomes. We tested whether a brief online mindfulness meditation training produces detectable changes in autonomic regulation during a standardized stress-to-meditation sequence. Healthy adults with no meditation experience were randomized to a four-week online mindfulness meditation program (MG) or an active health-management program (CG). Before and after training, participants completed a laboratory session consisting of rest, a mental arithmetic stress task, guided focused-attention breathing meditation, and post-rest while ECG was recorded. Across the training period, both groups showed reduced negative affective symptoms, but only the mindfulness group showed an increase in the Observing facet. Critically, frequency-domain HRV indices during the laboratory protocol showed a group-specific post-training pattern: MG exhibited lower LF/HF and higher normalized HF power (nHF) compared with pre-training, and MG differed from CG in the post-training session. Within MG, training-related improvement in FFMQ Non-reactivity was positively associated with nHF during the post-stress meditation period. These findings indicate that a brief online mindfulness program can modulate HRV in a stress-to-meditation context and that post-stress autonomic modulation during meditation covaries with acceptance-related skill acquisition.
Zhang, Y.; Dong, W.; Fu, K.; Zhou, M.; Qing, Y.; Chan, R. C. K.; Kendrick, K. M.; Yao, D.; Yao, S.; Becker, B.
Overarching conceptualizations propose a critical role of the default mode network (DMN) in self-referential mental time travel, particularly in autobiographical memory retrieval and episodic future thinking, and internal (intrinsic) emotion generation and regulation. However, these conceptualizations have not been directly evaluated. Against this background, the present fMRI study aimed to identify both shared and distinct neural systems underlying autobiographical episodic processing across different temporal contexts - specifically, episodic memory retrieval (EMR) and episodic future thinking (EFT) - and to examine how these systems interact with affective experiences, including valence and arousal. Our findings demonstrated the central role of the DMN - encompassing the medial prefrontal cortex (mPFC), posterior cingulate cortex (PCC), and medial temporal lobe (MTL) - in both EMR and EFT. Importantly, we identified a functional dissociation along both valence and temporal dimensions: the ventromedial prefrontal cortex (vmPFC) was more strongly associated with positive experiences and simulations, whereas the dorsomedial prefrontal cortex (dmPFC) was consistently engaged during the processing of negative affect across past and future contexts. Moreover, representational similarity and parametric analyses indicated that the hippocampus supports differential processing of valence and arousal across temporal domains. Together, these findings provide empirical evidence for the involvement of cortical midline core DMN systems in autobiographical processing across time and suggest overlapping and distinct systems for the integration of emotional experiences across mental time travel.